
    Melanocortin peptides inhibit urate crystal-induced activation of phagocytic cells

    Introduction The melanocortin peptides have marked anti-inflammatory potential, primarily through inhibition of proinflammatory cytokine production and action on phagocytic cell functions. Gout is an acute form of arthritis caused by the deposition of urate crystals, in which phagocytic cells and cytokines play a major pathogenic role. We examined whether alpha-melanocyte-stimulating hormone (α-MSH) and its synthetic derivative (CKPV)2 influence urate crystal-induced monocyte (Mo) activation and neutrophil responses in vitro. Methods Purified Mos were stimulated with monosodium urate (MSU) crystals in the presence or absence of melanocortin peptides. The supernatants were tested for their ability to induce neutrophil activation in terms of chemotaxis, production of reactive oxygen intermediates (ROIs), and membrane expression of CD11b, Toll-like receptor-2 (TLR2), and TLR4. The proinflammatory cytokines interleukin (IL)-1β, IL-8, and tumor necrosis factor-alpha (TNF-α), as well as caspase-1, were determined in the cell-free supernatants. In parallel experiments, purified neutrophils were preincubated overnight with or without melanocortin peptides before the functional assays. Results The supernatants from MSU crystal-stimulated Mos exerted chemoattractant and priming activity on neutrophils, estimated as ROI production and CD11b membrane expression. The supernatants of Mos stimulated with MSU in the presence of melanocortin peptides had less chemoattractant activity for neutrophils and less ability to prime neutrophils for CD11b membrane expression and oxidative burst. MSU crystal-stimulated Mos produced significant levels of IL-1β, IL-8, TNF-α, and caspase-1. The concentrations of proinflammatory cytokines, but not of caspase-1, were reduced in the supernatants from Mos stimulated by MSU crystals in the presence of melanocortin peptides. Overnight incubation of neutrophils with the peptides significantly inhibited their ability to migrate toward chemotactic supernatants and their capacity to be primed in terms of ROI production. Conclusions α-MSH and (CKPV)2 have a dual effect on MSU crystal-induced inflammation, inhibiting the Mos' ability to produce neutrophil chemoattractants and activating compounds and preventing the neutrophil responses to these proinflammatory substances. These findings reinforce previous observations on the potential role of α-MSH and related peptides as a new class of drugs for the treatment of inflammatory arthritis.

    Compulsory Flow Q-Learning: an RL algorithm for robot navigation based on partial-policy and macro-states

    Reinforcement Learning is carried out on-line, through trial-and-error interactions of the agent with the environment, which can be very time consuming when robots are involved. In this paper we contribute a new learning algorithm, CFQ-Learning, which uses macro-states, a low-resolution discretisation of the state space, and a partial-policy to get around obstacles, both of them based on the complexity of the environment structure. The use of macro-states can hinder the convergence of learning algorithms, but it accelerates the learning process. On the other hand, partial-policies can guarantee that an agent fulfils its task, even when acting over macro-states. Experiments show that CFQ-Learning achieves a good balance between policy quality and learning rate. Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), GRICES, FAPESP, CNPq.
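
    To make the macro-state idea above concrete, the sketch below shows plain tabular Q-learning running over a low-resolution discretisation of a grid-like state space. The aggregation function, the environment interface (reset/step/actions), and the hyperparameters are illustrative assumptions; the sketch does not reproduce CFQ-Learning's partial-policy mechanism.

    ```python
    import random
    from collections import defaultdict

    def to_macro_state(x, y, cell_size=4):
        """Low-resolution discretisation: nearby positions share one macro-state."""
        return (x // cell_size, y // cell_size)

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
        # Q-table keyed by (macro_state, action); unseen entries default to 0.0
        Q = defaultdict(float)
        for _ in range(episodes):
            x, y = env.reset()
            done = False
            while not done:
                s = to_macro_state(x, y)
                # epsilon-greedy action selection over the macro-state
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda act: Q[(s, act)])
                (x, y), reward, done = env.step(a)
                s_next = to_macro_state(x, y)
                best_next = 0.0 if done else max(Q[(s_next, a2)] for a2 in env.actions)
                # standard Q-learning temporal-difference update
                Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        return Q
    ```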

    A step towards a reinforcement learning de novo genome assembler

    The use of reinforcement learning has proven to be very promising for solving complex activities without human supervision during the learning process. However, its successful applications are predominantly focused on fictional and entertainment problems, such as games. This work therefore aims to shed light on the application of reinforcement learning to a relevant real-world problem: genome assembly. By expanding the only approach found in the literature that addresses this problem, we carefully explored the aspects of intelligent agent learning, performed by the Q-learning algorithm, to understand its suitability for scenarios whose characteristics are closer to those faced by real genome projects. The improvements proposed here include changing the previously proposed reward system and adding state-space exploration optimization strategies based on dynamic pruning and mutual collaboration with evolutionary computing. These investigations were carried out on 23 new environments with larger inputs than those used previously. All of these environments are freely available on the internet so that the scientific community can build on this research. The results suggest consistent performance progress using the proposed improvements; however, they also demonstrate their limitations, especially those related to the high dimensionality of the state and action spaces. We also discuss the paths that can be followed to tackle genome assembly efficiently in real scenarios, considering recent successful reinforcement learning applications, including deep reinforcement learning, from other domains dealing with high-dimensional inputs.
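
    As a rough illustration of how assembly can be framed as an episodic RL task, the sketch below treats the state as the sequence of reads placed so far, an action as appending an unused read, and the reward as the suffix/prefix overlap with the last placed read. The overlap-based reward and the simplified one-step value update are assumptions made for illustration; they are not the reward system or the optimization strategies proposed in the paper.

    ```python
    import random
    from collections import defaultdict

    def overlap(a, b, min_len=3):
        """Length of the longest suffix of a that is a prefix of b."""
        for length in range(min(len(a), len(b)), min_len - 1, -1):
            if a[-length:] == b[:length]:
                return length
        return 0

    def assemble_episode(reads, Q, epsilon=0.2):
        """One episode: greedily/eps-randomly order all reads, updating Q as we go."""
        placed, remaining = [], set(range(len(reads)))
        total_reward = 0
        while remaining:
            state = tuple(placed)
            if random.random() < epsilon:
                action = random.choice(list(remaining))
            else:
                action = max(remaining, key=lambda r: Q[(state, r)])
            reward = overlap(reads[placed[-1]], reads[action]) if placed else 0
            # simplified one-step update (no bootstrapping, for brevity)
            Q[(state, action)] += 0.1 * (reward - Q[(state, action)])
            placed.append(action)
            remaining.remove(action)
            total_reward += reward
        return placed, total_reward

    # Toy usage with three short reads
    Q = defaultdict(float)
    order, score = assemble_episode(["ATCGGA", "GGATTC", "TTCAAG"], Q)
    ```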

    General detection model in cooperative multirobot localization

    The cooperative multirobot localization problem consists in localizing each robot in a group within the same environment, when robots share information in order to improve localization accuracy. This can be achieved when a robot detects and identifies another one and measures their relative distance; at that moment, both robots can use the detection information to update their own pose beliefs. However, other useful information besides a single detection between a pair of robots can be used to update the robots' pose beliefs, such as: propagation of a single detection to robots not involved in it, absence of detections, and detections involving more than a pair of robots. A general detection model is proposed in order to aggregate all detection information, addressing the problem of updating pose beliefs in all of the situations described. Experimental results in a simulated environment with groups of robots show that the proposed model improves localization accuracy when compared to conventional single-detection multirobot localization. Funding: FAPESP, CNPq.
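
    A minimal sketch of how a single pairwise detection can update a pose belief is shown below, using a particle-filter representation and Gaussian range noise. Both choices are assumptions made for illustration; the paper's general detection model additionally covers propagated detections, the absence of detections, and detections among more than two robots.

    ```python
    import numpy as np

    def update_on_detection(particles_i, particles_j, measured_dist, sigma=0.3):
        """Reweight and resample robot i's particles (N x [x, y, theta]) after it
        detects robot j at relative distance measured_dist."""
        # Expected distance from each of robot i's particles to robot j's mean position
        mean_j = particles_j[:, :2].mean(axis=0)
        expected = np.linalg.norm(particles_i[:, :2] - mean_j, axis=1)
        # Gaussian likelihood of the measured range under each particle
        weights = np.exp(-0.5 * ((expected - measured_dist) / sigma) ** 2) + 1e-12
        weights /= weights.sum()
        # Resample robot i's particles according to the detection likelihood
        idx = np.random.choice(len(particles_i), size=len(particles_i), p=weights)
        return particles_i[idx]

    # Toy usage: 100 particles per robot, detection measured at 2.0 m
    particles_i = np.random.randn(100, 3)
    particles_j = np.random.randn(100, 3) + np.array([2.0, 0.0, 0.0])
    particles_i = update_on_detection(particles_i, particles_j, measured_dist=2.0)
    ```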

    Reinforcement Learning Applied to Trading Systems: A Survey

    Financial domain tasks, such as trading in market exchanges, are challenging and have long attracted researchers. The recent achievements and the consequent notoriety of Reinforcement Learning (RL) have also increased its adoption in trading tasks. RL uses a framework with well-established formal concepts, which raises its attractiveness for learning profitable trading strategies. However, using RL without due attention in the financial area can lead new researchers to stray from standards or fail to adopt relevant conceptual guidelines. In this work, we embrace the seminal RL technical fundamentals, concepts, and recommendations to perform a unified, theoretically-grounded examination and comparison of previous research that could serve as a structuring guide for the field of study. A selection of twenty-nine articles was reviewed under our classification, which considers RL's most common formulations and design patterns from a large volume of available studies. This classification allowed for precise inspection of the most relevant aspects regarding data input, preprocessing, state and action composition, adopted RL techniques, evaluation setups, and overall results. Our analysis approach, organized around fundamental RL concepts, allowed for a clear identification of current system design best practices, gaps that require further investigation, and promising research opportunities. Finally, this review attempts to promote the development of this field of study by facilitating researchers' commitment to standards adherence and helping them avoid straying from the RL constructs' firm ground.
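
    For readers new to the formulations this survey classifies, the sketch below shows one common, deliberately simplified trading environment: the state is a window of recent returns, the action is sell/hold/buy, and the reward is the position times the next return. The window size and the frictionless reward are illustrative assumptions, not a recommendation drawn from the survey.

    ```python
    import numpy as np

    class TradingEnv:
        """Toy single-asset environment with returns-window states."""
        def __init__(self, prices, window=10):
            self.returns = np.diff(prices) / prices[:-1]
            self.window = window
            self.t = window

        def reset(self):
            self.t = self.window
            return self.returns[self.t - self.window:self.t]

        def step(self, action):
            position = action - 1            # map {0, 1, 2} to {-1, 0, +1}
            reward = position * self.returns[self.t]   # frictionless one-step PnL
            self.t += 1
            done = self.t >= len(self.returns)
            state = self.returns[self.t - self.window:self.t]
            return state, reward, done

    # Toy usage on a synthetic price series
    prices = 100.0 + np.cumsum(np.random.randn(200) * 0.5)
    env = TradingEnv(prices)
    state = env.reset()
    state, reward, done = env.step(2)        # hold a long position for one step
    ```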

    Markov decision processes for ad network optimization

    In this paper we examine a central problem in a particular advertising scheme: we are concerned with matching marketing campaigns that produce advertisements (“ads”) to impressions, where “impression” is a general term for any space on the internet that can display an ad. We propose a new take on the problem by resorting to planning techniques based on Markov Decision Processes and to plan generation techniques developed in the AI literature. We present a detailed formulation of the Markov Decision Process approach and results of simulated experiments. Anna Helena Reali Costa and Fábio Gagliardi Cozman are partially supported by CNPq. Flávio Sales Truzzi is supported by CAPES. The work reported here has received substantial support through FAPESP grant 2008/03995-5 and FAPESP grant 2011/19280-.
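
    As a toy illustration of the planning view described above, the sketch below runs value iteration on a small ad-allocation MDP in which states track the remaining budgets of two campaigns and each action assigns the next impression to one of them. The state space, click values, and discount factor are illustrative assumptions, not the formulation used in the paper.

    ```python
    # Toy ad-allocation MDP solved by value iteration.
    click_value = {0: 1.0, 1: 0.6}   # expected value of showing campaign 0 or 1
    max_budget = 3                   # impressions each campaign may still buy

    states = [(b0, b1) for b0 in range(max_budget + 1) for b1 in range(max_budget + 1)]

    def value_iteration(gamma=0.95, iters=100):
        V = {s: 0.0 for s in states}
        for _ in range(iters):
            for (b0, b1) in states:
                candidates = []
                if b0 > 0:   # assign the next impression to campaign 0
                    candidates.append(click_value[0] + gamma * V[(b0 - 1, b1)])
                if b1 > 0:   # assign the next impression to campaign 1
                    candidates.append(click_value[1] + gamma * V[(b0, b1 - 1)])
                V[(b0, b1)] = max(candidates) if candidates else 0.0
        return V

    # Value of starting with both campaigns at full budget
    V = value_iteration()
    print(V[(max_budget, max_budget)])
    ```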

    DEBACER: a method for slicing moderated debates

    Subjects change frequently in moderated debates with several participants, such as parliamentary sessions, electoral debates, and trials. Partitioning a debate into blocks with the same subject is essential for understanding. Often a moderator is responsible for defining when a new block begins, so the task of automatically partitioning a moderated debate can focus solely on the moderator's behavior. In this paper, we (i) propose a new algorithm, DEBACER, which partitions moderated debates; (ii) carry out a comparative study between conventional and BERTimbau pipelines; and (iii) validate DEBACER by applying it to the minutes of the Assembly of the Republic of Portugal. Our results show the effectiveness of DEBACER.
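
    A minimal sketch of the moderator-centred slicing idea is given below: each moderator turn is passed to a binary classifier that decides whether it opens a new block. The classifier interface and the keyword stand-in are illustrative assumptions that take the place of the conventional or BERTimbau-based pipelines compared in the paper.

    ```python
    def slice_debate(turns, moderator, starts_new_block):
        """turns: list of (speaker, text); starts_new_block: text -> bool."""
        blocks, current = [], []
        for speaker, text in turns:
            # Only moderator turns may open a new block
            if speaker == moderator and current and starts_new_block(text):
                blocks.append(current)
                current = []
            current.append((speaker, text))
        if current:
            blocks.append(current)
        return blocks

    # Toy usage with a trivial keyword-based stand-in classifier
    blocks = slice_debate(
        [("Moderator", "Let us open the session."),
         ("Deputy A", "On the budget..."),
         ("Moderator", "We now move to the next item on the agenda."),
         ("Deputy B", "Regarding health policy...")],
        moderator="Moderator",
        starts_new_block=lambda text: "next item" in text.lower(),
    )
    ```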

    Virtual Reality: Stereoscopy in Education

    Virtual reality (VR) in education is a topic strongly present in research institutions in several countries. This article discusses the application of VR techniques, including the use of computer graphics and the production of three-dimensional videos with equipment that is specialized yet low-cost for educational institutions. Stereoscopy acts as the key element for visualizing these applications. For the development of the project, a 3D lens, a consumer camera, low-cost projectors, polarized light filters, and passive 3D glasses are used. The goal of producing the 3D video was to evaluate everything from the processes involved in scriptwriting, recording, and screening to the costs required for an educational institution to adopt virtual reality resources to enhance learning.